    Research on object placement method based on trajectory recognition in Metaverse

    Many studies of object placement in virtual reality environments focus on only one aspect, such as efficiency, accuracy, or interactivity. However, balancing these aspects and taking multiple indicators into account is the key to improving the user experience. This paper therefore proposes an efficient and interactive object placement method based on recognizing the controller trajectory in a virtual reality environment. To create user-friendly feedback, we visualize the intersection of the ray and the scene by linking the controller motion information to the ray. The trajectory is abstracted as a point cloud for matching, and the corresponding object is instantiated at the center of the trajectory. To verify the interactive performance of, and user satisfaction with, this method, we carry out a user experience study. The results show that both efficiency and interaction interest are improved by the new method, which offers a useful idea for the interactive design of virtual reality layout applications.
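
    As a rough illustration of the matching and placement steps described above, the following is a minimal Python sketch (not the authors' implementation): it treats a controller trajectory as a 3D point cloud, matches it against hypothetical object templates with a symmetric nearest-neighbour (chamfer-style) distance, and instantiates the chosen object at the trajectory centroid. All function names and templates are assumptions for illustration.

        import numpy as np

        def chamfer_distance(a, b):
            # Symmetric mean nearest-neighbour distance between (N, 3) and (M, 3) point sets.
            d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
            return d.min(axis=1).mean() + d.min(axis=0).mean()

        def normalize(points):
            # Center the point set and scale it to unit radius so matching ignores pose and size.
            centered = points - points.mean(axis=0)
            scale = np.linalg.norm(centered, axis=1).max() or 1.0
            return centered / scale

        def match_and_place(trajectory, templates):
            # Return the best-matching template name and the placement position (trajectory centroid).
            query = normalize(trajectory)
            best = min(templates, key=lambda k: chamfer_distance(query, normalize(templates[k])))
            return best, trajectory.mean(axis=0)

        # Hypothetical usage: a circular trajectory matched against two toy templates.
        t = np.linspace(0, 2 * np.pi, 64)
        circle = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
        trajectory = circle + np.array([1.0, 0.5, 2.0])
        templates = {"ring_marker": circle,
                     "line_marker": np.stack([t / np.pi - 1, np.zeros_like(t), np.zeros_like(t)], axis=1)}
        label, position = match_and_place(trajectory, templates)
        print(label, position)  # expected: ring_marker, roughly [1.0, 0.5, 2.0]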

    Neural Contourlet Network for Monocular 360 Depth Estimation

    For a monocular 360 image, depth estimation is challenging because the distortion increases along the latitude. To perceive this distortion, existing methods resort to designing deep and complex network architectures. In this paper, we provide a new perspective that constructs an interpretable and sparse representation for a 360 image. Considering the importance of geometric structure in depth estimation, we utilize the contourlet transform to capture an explicit geometric cue in the spectral domain and integrate it with an implicit cue in the spatial domain. Specifically, we propose a neural contourlet network consisting of a convolutional neural network and a contourlet transform branch. In the encoder stage, we design a spatial-spectral fusion module to effectively fuse the two types of cues. Conversely, in the decoder we employ the inverse contourlet transform with learned low-pass subbands and band-pass directional subbands to compose the depth map. Experiments on three popular panoramic image datasets demonstrate that the proposed approach outperforms state-of-the-art schemes with faster convergence. Code is available at https://github.com/zhijieshen-bjtu/Neural-Contourlet-Network-for-MODE. (Comment: IEEE Transactions on Circuits and Systems for Video Technology)
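
    The abstract does not spell out the fusion module, so the following is only a hedged PyTorch sketch of what a spatial-spectral fusion block could look like: a spectral (subband) feature is resized to the spatial feature's resolution, the two are concatenated and projected, and a channel gate re-weights the result. The module name, channel sizes, and gating scheme are assumptions, not the released code.

        import torch
        import torch.nn as nn

        class SpatialSpectralFusion(nn.Module):
            # Toy fusion block: concatenate a spatial CNN feature with a spectral
            # (e.g. contourlet subband) feature, project, and re-weight channels with a gate.
            def __init__(self, spatial_ch, spectral_ch, out_ch):
                super().__init__()
                self.proj = nn.Conv2d(spatial_ch + spectral_ch, out_ch, kernel_size=3, padding=1)
                self.gate = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                          nn.Conv2d(out_ch, out_ch, kernel_size=1),
                                          nn.Sigmoid())

            def forward(self, spatial_feat, spectral_feat):
                # Bring the spectral feature to the spatial resolution before fusing.
                spectral_feat = nn.functional.interpolate(
                    spectral_feat, size=spatial_feat.shape[-2:], mode="bilinear", align_corners=False)
                fused = self.proj(torch.cat([spatial_feat, spectral_feat], dim=1))
                return fused * self.gate(fused)

        # Hypothetical usage with random tensors standing in for encoder features.
        fusion = SpatialSpectralFusion(spatial_ch=64, spectral_ch=16, out_ch=64)
        out = fusion(torch.randn(1, 64, 128, 256), torch.randn(1, 16, 64, 128))
        print(out.shape)  # torch.Size([1, 64, 128, 256])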

    RecRecNet: Rectangling Rectified Wide-Angle Images by Thin-Plate Spline Model and DoF-based Curriculum Learning

    The wide-angle lens shows appealing applications in VR technologies, but it introduces severe radial distortion into the captured image. To recover the realistic scene, previous works focus on rectifying the content of the wide-angle image. However, such a rectification inevitably distorts the image boundary, which changes the related geometric distributions and misleads current vision perception models. In this work, we explore constructing a win-win representation of both content and boundary by contributing a new learning model, the Rectangling Rectification Network (RecRecNet). In particular, we propose a thin-plate spline (TPS) module to formulate the non-linear and non-rigid transformation for rectangling images. By learning the control points on the rectified image, our model can flexibly warp the source structure to the target domain and achieve an end-to-end unsupervised deformation. To relieve the complexity of structure approximation, we then guide RecRecNet to learn gradual deformation rules with DoF (Degree of Freedom)-based curriculum learning. By increasing the DoF at each curriculum stage, namely from similarity transformation (4-DoF) to homography transformation (8-DoF), the network can investigate more detailed deformations, offering fast convergence on the final rectangling task. Experiments show the superiority of our solution over the compared methods in both quantitative and qualitative evaluations. The code and dataset are available at https://github.com/KangLiao929/RecRecNet. (Comment: Accepted to ICCV 2023)
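
    The TPS module itself is learned in RecRecNet; as a generic, non-learned illustration of the underlying thin-plate spline machinery, the NumPy sketch below fits a 2D spline that maps source control points to target control points and uses it to warp arbitrary coordinates. This is textbook TPS fitting on assumed toy control points, not the paper's module.

        import numpy as np

        def tps_kernel(r2):
            # TPS radial basis U(r) = r^2 * log(r^2), with U(0) defined as 0.
            return np.where(r2 == 0, 0.0, r2 * np.log(r2 + 1e-12))

        def fit_tps(src, dst):
            # Fit a 2D thin-plate spline mapping src control points (N, 2) to dst (N, 2).
            n = src.shape[0]
            K = tps_kernel(((src[:, None, :] - src[None, :, :]) ** 2).sum(-1))
            P = np.hstack([np.ones((n, 1)), src])
            A = np.zeros((n + 3, n + 3))
            A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
            b = np.zeros((n + 3, 2))
            b[:n] = dst
            return np.linalg.solve(A, b)  # (n + 3, 2): RBF weights, then affine part

        def apply_tps(points, src, params):
            # Warp arbitrary (M, 2) points with the fitted spline.
            U = tps_kernel(((points[:, None, :] - src[None, :, :]) ** 2).sum(-1))
            P = np.hstack([np.ones((points.shape[0], 1)), points])
            return U @ params[:src.shape[0]] + P @ params[src.shape[0]:]

        # Hypothetical usage: a unit square of control points with one corner pulled inward.
        src = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5]], dtype=float)
        dst = src.copy()
        dst[3] = [0.9, 0.9]
        params = fit_tps(src, dst)
        print(apply_tps(np.array([[0.75, 0.75]]), src, params))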

    Deep Rectangling for Image Stitching: A Learning Baseline

    Stitched images provide a wide field of view (FoV) but suffer from unpleasant irregular boundaries. To deal with this problem, existing image rectangling methods search for an initial mesh and optimize a target mesh to form the mesh deformation in two stages; rectangular images can then be generated by warping the stitched images. However, these solutions only work for images with rich linear structures, leading to noticeable distortions for portraits and landscapes with non-linear objects. In this paper, we address these issues by proposing the first deep learning solution to image rectangling. Concretely, we predefine a rigid target mesh and estimate only an initial mesh to form the mesh deformation, contributing to a compact one-stage solution. The initial mesh is predicted by a fully convolutional network with a residual progressive regression strategy. To obtain results with high content fidelity, a comprehensive objective function is proposed that simultaneously encourages the boundary to be rectangular, the mesh to be shape-preserving, and the content to be perceptually natural. Besides, we build the first image stitching rectangling dataset with a large diversity of irregular boundaries and scenes. Experiments demonstrate our superiority over traditional methods both quantitatively and qualitatively. (Comment: Accepted by CVPR 2022 (oral); code and dataset: https://github.com/nie-lang/DeepRectangling)
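
    As a hedged sketch of how such a comprehensive objective could be assembled (the term definitions and weights here are assumptions, not the paper's exact losses; in particular, a plain photometric term stands in for the perceptual one), a PyTorch version might look like:

        import torch
        import torch.nn.functional as F

        def rectangling_loss(warped, target, pred_mask, mesh,
                             w_boundary=1.0, w_mesh=0.1, w_content=1.0):
            # Toy composite objective; pred_mask is an assumed warped validity mask in [0, 1],
            # and mesh is the predicted vertex grid of shape (B, rows, cols, 2).
            # Boundary term: the warped mask should fill the whole rectangular frame.
            loss_boundary = F.l1_loss(pred_mask, torch.ones_like(pred_mask))
            # Shape-preserving term: neighbouring mesh edges should stay near-uniform.
            dx = mesh[:, :, 1:, :] - mesh[:, :, :-1, :]
            dy = mesh[:, 1:, :, :] - mesh[:, :-1, :, :]
            loss_mesh = dx.var() + dy.var()
            # Content term: plain photometric stand-in for the perceptual loss.
            loss_content = F.l1_loss(warped, target)
            return w_boundary * loss_boundary + w_mesh * loss_mesh + w_content * loss_content

        # Hypothetical shapes: batch of 2, 256x256 images, 13x13 mesh grid.
        loss = rectangling_loss(torch.rand(2, 3, 256, 256), torch.rand(2, 3, 256, 256),
                                torch.rand(2, 1, 256, 256), torch.rand(2, 13, 13, 2))
        print(loss.item())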

    Learning Thin-Plate Spline Motion and Seamless Composition for Parallax-Tolerant Unsupervised Deep Image Stitching

    Traditional image stitching approaches tend to leverage increasingly complex geometric features (points, lines, edges, etc.) for better performance. However, these hand-crafted features are only suitable for specific natural scenes with adequate geometric structure. In contrast, deep stitching schemes overcome such adverse conditions by adaptively learning robust semantic features, but they cannot handle large-parallax cases due to homography-based registration. To solve these issues, we propose UDIS++, a parallax-tolerant unsupervised deep image stitching technique. First, we propose a robust and flexible warp that models image registration from global homography to local thin-plate spline motion. It provides accurate alignment for overlapping regions and shape preservation for non-overlapping regions through joint optimization of alignment and distortion. Subsequently, to improve the generalization capability, we design a simple but effective iterative strategy to enhance the warp adaptation in cross-dataset and cross-resolution applications. Finally, to further eliminate parallax artifacts, we propose to composite the stitched image seamlessly by unsupervised learning of seam-driven composition masks. Compared with existing methods, our solution is parallax-tolerant and free from laborious designs of complicated geometric features for specific scenes. Extensive experiments show our superiority over the SoTA methods, both quantitatively and qualitatively. The code will be available at https://github.com/nie-lang/UDIS2.
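
    The seam-driven composition masks in UDIS++ are learned; as a rough, classical stand-in for the idea, the sketch below finds a minimal photometric-difference seam through the aligned overlap by dynamic programming (as in seam carving) and turns it into a binary composition mask. All names and the DP formulation are illustrative assumptions, not the paper's method.

        import numpy as np

        def seam_composition_mask(img_a, img_b):
            # Find a vertical seam of minimal photometric difference through the overlap
            # (dynamic programming) and return a mask that takes pixels from img_a on the
            # left of the seam and from img_b on the right.
            diff = np.abs(img_a.astype(float) - img_b.astype(float)).sum(axis=-1)
            h, w = diff.shape
            cost = diff.copy()
            for y in range(1, h):
                up = cost[y - 1]
                neighbours = np.stack([np.r_[np.inf, up[:-1]], up, np.r_[up[1:], np.inf]])
                cost[y] += neighbours.min(axis=0)
            seam = np.zeros(h, dtype=int)
            seam[-1] = int(cost[-1].argmin())
            for y in range(h - 2, -1, -1):      # backtrack the cheapest path upward
                lo = max(0, seam[y + 1] - 1)
                hi = min(w, seam[y + 1] + 2)
                seam[y] = lo + int(cost[y, lo:hi].argmin())
            mask = np.zeros((h, w), dtype=bool)
            for y in range(h):
                mask[y, : seam[y] + 1] = True   # True -> take the pixel from img_a
            return mask

        # Hypothetical usage on two aligned overlap crops.
        a, b = np.random.rand(64, 80, 3), np.random.rand(64, 80, 3)
        mask = seam_composition_mask(a, b)
        composite = np.where(mask[..., None], a, b)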

    The Clinical Significance of Expression of ERCC1 and PKCα in Non-small Cell Lung Cancer

    Background and objective Excision repair cross-complementing 1 (ERCC1), an important member of the DNA repair gene family, plays a key role in nucleotide excision repair and in the apoptosis of tumor cells. Protein kinase C-α (PKCα), an isozyme of the protein kinase C family, is an important signaling molecule in tumor signal transduction pathways and has been implicated in malignant transformation and proliferation. The aim of this study was to explore the clinical significance of ERCC1 and PKCα in non-small cell lung cancer (NSCLC). Methods The expression of ERCC1 and PKCα was examined by immunohistochemistry (IHC) in tissue specimens from 51 NSCLC patients and in 21 paracancerous tissue specimens. The relationship between the detected data and the patients' clinical parameters was analyzed with SPSS 13.0 software. Results The positive expression rates of ERCC1 and PKCα in NSCLC tissues were significantly higher than in paracancerous tissues (P<0.05). Expression of ERCC1 was closely related to clinical stage and N stage: the positive rate of ERCC1 was higher in stage III+IV or N1+N2 patients than in stage I+II or N0 patients (P=0.011, P=0.015). We also found, by χ² test, that the 5-year survival of the ERCC1-negative group was remarkably higher than that of the positive group (P<0.05). Expression of ERCC1 was positively correlated with PKCα by Spearman's correlation analysis (r=0.425, P=0.002) in NSCLC. Conclusion The results suggest that ERCC1 and PKCα might be correlated with the development of NSCLC, and that ERCC1 might be related to the prognosis of NSCLC. There might exist a mechanism of coordination or regulation between ERCC1 and PKCα.